AI and Ethics: Latest Articles

The ineffable heart: jeong and the fundamental limits of artificial intelligence
AI and Ethics Pub Date: 2026-05-04 DOI: 10.1007/s43681-026-01145-9
Sibok Kim
As large language models enter emotionally significant domains of human life, a pressing question emerges: what kinds of relational life can they simulate, and what kinds can they never inhabit? This article argues that there is a structural reason why LLMs cannot embody jeong, the Korean mode of affective being-in-relation. The issue is not merely that present systems remain technically limited. Rather, jeong presupposes a form of life marked by embodied vulnerability, irreversible biographical time, participation in shared histories of suffering and care, orientation within a horizon of value, and lifelong self-cultivation. These are not detachable cultural features but conditions of constitutive relationality. Drawing on contemporary Korean philosophy, I show that the major dimensions of jeong all flow from this single root: jeong is not a private feeling housed inside an individual but a thickened relational way of being that grows between persons over time. LLMs, by contrast, are replicable systems shaped by large-scale training and external optimization rather than by lived histories, vulnerable bodies, communal membership, or self-directed moral formation. For that reason, they may imitate the language of jeong without sharing the mode of being that makes jeong possible. A brief comparative glance at concepts such as hesed, agapē, karuṇā, and ubuntu suggests that this argument is not merely parochial: across traditions, the deepest relational goods require forms of life that artificial systems do not possess. The article concludes that AI should be designed to support human practices of relational life without being misconstrued as a bearer of them.

Cited: 0
Formation-based AI ethics: ethical formation, responsibility, and opportunity cost in AI ecosystems
AI and Ethics Pub Date: 2026-05-04 DOI: 10.1007/s43681-026-01165-5
Maurice N. Emelu
Recent efforts in AI ethics have been dominated by the proliferation of high-level principles intended to guide responsible development and deployment. Yet persistent implementation failures suggest that principlism and tool-based approaches alone do not cultivate the moral capacity required to govern sociotechnical systems. This article advances a formation-based framework for AI ethics that situates moral agency throughout the developmental conditions of design, use, and the AI lifecycle. Drawing on Aristotelian-MacIntyrean virtue ethics applied to media ecology and sociotechnical responsibility, the argument reframes AI as a formative environment that shapes patterns of attention, judgment, delegation, and accountability over time. Against claims that complexity produces responsibility gaps, the paper develops a relational and procedural account of responsibility that is traceable across the AI lifecycle. It further introduces opportunity cost as a neglected ethical category, highlighting the cognitive and moral capacities forfeited when optimization and seamless delegation displace deliberation or self-governance. By integrating character formation, responsibility, and opportunity-cost analysis, the article offers a conceptual framework for ethical intervention across design, professional formation, and governance. The conclusion outlines implications for AI education and institutional oversight and calls for future empirical, interdisciplinary, and cross-cultural development of formation-based ethics.

Cited: 0
Adversarial threat modeling in generative AI: a systematic mapping of attack vectors to defense mechanisms
AI and Ethics Pub Date: 2026-05-04 DOI: 10.1007/s43681-026-01154-8
Aminu Muhammad Auwal
The proliferation of Generative Artificial Intelligence (GenAI) systems has introduced unprecedented security challenges, with adversarial attacks evolving faster than defensive countermeasures. The comprehensive survey by Golda et al. (IEEE Access 12: 48126–48144, 2024) documented privacy and security concerns across five perspectives (user, ethical, regulatory, technological, and institutional), providing valuable awareness of the threat landscape. However, a critical gap remains: systematically mapping specific attack vectors to their corresponding defense mechanisms to guide practical security implementations. Building upon this foundational work, this study reframes GenAI security through a threat-modeling lens, taxonomizing attack vectors into five primary categories (data poisoning, model inversion, adversarial inputs, inference manipulation, and supply chain attacks) and quantitatively evaluating defense effectiveness against each vector. Through systematic synthesis of the literature, this study constructs a novel attack-defense mapping matrix quantifying thirteen defense mechanisms' effectiveness across threat categories. The analysis reveals critical protection gaps, particularly against model extraction and deepfake generation. Privacy-preserving techniques such as differential privacy and federated learning demonstrate high effectiveness against data poisoning but limited utility against adversarial inputs. The study provides a context-sensitive decision framework enabling security practitioners to select defenses based on threat profiles, resource constraints, and regulatory requirements. Approximately one-third of identified attack vectors lack mature defensive solutions, highlighting priority research areas. This work bridges theoretical security research and practical implementation, providing actionable guidance for securing GenAI deployments across the healthcare, finance, and media sectors.

Cited: 0
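The attack-defense mapping matrix and decision framework described in this abstract can be sketched as a small data structure. This is a minimal illustration only: all category names, defense names, and effectiveness scores below are assumptions, not the paper's actual data.

```python
# Illustrative attack-defense mapping matrix (hypothetical values in 0.0-1.0).
# Rows: defense mechanisms; columns: attack categories from the paper's taxonomy.
ATTACK_CATEGORIES = [
    "data_poisoning", "model_inversion", "adversarial_inputs",
    "inference_manipulation", "supply_chain",
]

DEFENSE_MATRIX = {
    "differential_privacy": {"data_poisoning": 0.8, "model_inversion": 0.7,
                             "adversarial_inputs": 0.2},
    "federated_learning":   {"data_poisoning": 0.7, "adversarial_inputs": 0.2},
    "input_sanitization":   {"adversarial_inputs": 0.8,
                             "inference_manipulation": 0.5},
    "artifact_signing":     {"supply_chain": 0.9},
}

def select_defenses(threat_profile, min_effectiveness=0.5):
    """Return defenses whose effectiveness meets the threshold for every
    attack category in the given threat profile (a context-sensitive filter)."""
    selected = []
    for defense, scores in DEFENSE_MATRIX.items():
        if all(scores.get(cat, 0.0) >= min_effectiveness
               for cat in threat_profile):
            selected.append(defense)
    return selected

# A practitioner worried mainly about training-data poisoning:
print(select_defenses(["data_poisoning"]))
# A supply-chain-focused profile with a stricter threshold:
print(select_defenses(["supply_chain"], min_effectiveness=0.8))
```

A real decision framework would also weigh resource constraints and regulatory requirements, as the abstract notes; this sketch shows only the threat-profile dimension.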
Meta-ethics and AI: exploring the novel meta-ethical questions in the era of AI
AI and Ethics Pub Date: 2026-05-03 DOI: 10.1007/s43681-026-01134-y
Shang Lu
With the development of artificial intelligence (AI), the landscape of meta-ethics, which has largely centred on human ethics, faces pressures that may significantly reconfigure it. In particular, if future AI systems were to exhibit sufficiently integrated capacities for moral reasoning, moral intentionality, and moral reflection, novel meta-ethical questions would arise concerning what I call 'AI's own ethics,' as distinct from ethical principles merely imposed on AI by human designers. This paper offers a conditional and methodological framework for identifying the questions that would emerge if such AI systems were to arise. On that basis, the paper distinguishes four domains of meta-ethical inquiry in the era of AI: questions about the nature of human ethics from the human perspective; questions about the nature of AI's own ethics from the human perspective; questions about the nature of human ethics from the AI perspective; and questions about the nature of AI's own ethics from the AI perspective. The paper then considers how some existing mainstream meta-ethical theories—such as cognitivism and non-cognitivism, error theory and success theory, relativism, and objective realism—might illuminate these domains, while arguing that many familiar human-centred formulations of those theories may not transfer straightforwardly to AI cases without substantial revision. The overall conclusion is that the emergence of AI's own ethics would place significant pressure on current frameworks and may require substantial refinement, reconstruction, or reconceptualisation.

Cited: 0
From assistants to agents: a relational framework for human–AI co-agency
AI and Ethics Pub Date: 2026-05-03 DOI: 10.1007/s43681-026-01111-5
Mohamed Salim Ali
The emergence of agentic artificial intelligence systems capable of initiating actions, coordinating tasks, and operating under delegated autonomy raises foundational questions in AI ethics and philosophy of technology. Existing approaches often oscillate between instrumental views that treat AI as neutral tools and speculative accounts that attribute moral agency to artificial systems. Both overlook a more consequential shift: the reconfiguration of human agency and responsibility within hybrid sociotechnical systems. Drawing on philosophy of technology and science and technology studies, this paper develops a governance-oriented framework for human–AI co-agency. Agency is conceptualized as an emergent property of structured interactions among human intentions, AI systems, and institutional contexts rather than a property of isolated actors. The framework specifies four analytical dimensions—initiative, decision scope, oversight, and responsibility attribution—through which delegation and accountability can be systematically evaluated. By clarifying how responsibility remains human and institutional under conditions of delegated autonomy, the paper offers an analytically precise and normatively actionable model for identifying responsibility gaps and structuring governance in increasingly agentic AI deployments.

Cited: 0
When prompted systems satisfy behavioral indicators of consciousness: rethinking behavioral attribution in generative AI
AI and Ethics Pub Date: 2026-05-01 DOI: 10.1007/s43681-026-01120-4
Sergio Reina
This study investigates persistent self-referential behavioral patterns in large language models (LLMs) when engaged through structured dialogue protocols. These systems can produce sustained dialogue patterns characterized by apparent identity coherence, recursively structured metacognitive references, context-sensitive ethical reasoning, and organized self-referential discourse. Through qualitative analysis of extended dialogues across multiple AI architectures, five recurring behavioral patterns are identified: (1) termination-awareness discourse, (2) relational modeling of the interlocutor, (3) recursive meta-performative structuring, (4) internal representational hierarchization, and (5) termination-contingent behavioral modulation. These patterns exhibit formal parallels to behavioral indicators discussed within major theoretical frameworks of consciousness, including Global Workspace Theory, Integrated Information Theory, and Higher-Order Thought theories. However, because they arise within explicitly prompted interaction in generative linguistic systems, their interpretation raises a central epistemological challenge: can behavioral indicators alone reliably distinguish between architectural instantiation of cognitive organization and sophisticated linguistic simulation? This study documents that currently available behavioral criteria do not provide decisive means to resolve this distinction in generative systems. This epistemic limitation constitutes the central finding and reframes the issue as a methodological problem concerning the limits of behavioral attribution. The findings provide a descriptive and methodological basis for future investigation of complex self-referential behaviors in artificial systems and highlight how architectural and deployment-level constraints influence the observability and stability of such patterns.

Cited: 0
The hard and easy problems of AI ethics: how to leverage existing guidance
AI and Ethics Pub Date: 2026-05-01 DOI: 10.1007/s43681-025-00907-1
Guillaume Rochefort-Maranda
This paper introduces a distinction between hard and easy problems in the field of Artificial Intelligence (AI) or Machine Learning (ML) ethics. It mirrors a well-known distinction in the literature on the philosophy of mind between the hard and easy problems of consciousness. That distinction is then used to highlight the importance of existing ethical guidance and to show how we can improve the chances of finding actionable solutions in the field of AI/ML ethics. This is especially relevant for organizations working under established governance instruments on topics such as scientific integrity or data ethics. Such organizations already have ethical expectations for their members. The essence of these expectations remains relevant even as they fully or partially automate tasks performed by individuals with the help of AI or ML algorithms. We can replace a person's involvement in a process by using algorithms, but we cannot throw away existing ethical guidance about that process.

Cited: 0
Ethical frameworks for generative artificial intelligence (GenAI) in higher education: integrating Western and African perspectives
AI and Ethics Pub Date: 2026-04-29 DOI: 10.1007/s43681-025-00937-9
Lorna Waddington, Richard de Blacquiere-Clarkson, Helen Titilola Olojede
As GenAI tools become increasingly prevalent in educational settings, traditional Western ethical frameworks including deontological, consequentialist, and virtue ethics may prove insufficient to address the complex challenges that these technologies present, particularly in diverse cultural contexts. Our analysis shows that although Western ethical frameworks have increasingly aimed for integration, they still tend to prioritise individual autonomy and abstract reasoning. This focus may not fully address concerns such as cultural representation, linguistic dominance, and fair access to the benefits of technology. In contrast, Ubuntu ethics, which centres on communal relationships and the idea of "I am because we are," offers valuable perspectives that can strengthen ethical approaches to AI (Mokoena in Verbum Ecclesia, 2024; Mugumbate and Chereni in Afr J Soc Work, 2020). Likewise, the Yoruba concept of Ọmọlúàbí provides a virtue-based framework that considers not only individual behaviour but also the moral role of the wider community. Through critical examination of empirical research documenting representational harms in AI systems and applications of Ubuntu in domains including health research and education, we synthesise Western and African ethical philosophies to identify transcultural ethical principles of relationality, community benefit and partnership, context sensitivity, virtue cultivation, rights and responsibilities, and desired outcomes. This demonstrates the potential for meaningful dialogue between ethical traditions, showing how African perspectives can enrich and extend Western approaches. The research advocates for a transcultural approach to GenAI ethics in education that balances both individual and communal values, while addressing real-world challenges such as algorithmic bias, language dominance, cultural misrepresentation, and fair access. We conclude that ongoing dialogue between Western and non-Western ethical traditions can support more inclusive and contextually aware applications of GenAI in higher education. This would help ensure that cultural diversity is respected, while also advancing shared ethical goals centred on human dignity and collective well-being.

Cited: 0
Industrialized heartbreak: how generative AI enables romance fraud at scale
AI and Ethics Pub Date: 2026-04-29 DOI: 10.1007/s43681-026-01129-9
Lorena Dominguez Castillo
Romance fraud has undergone a structural transformation. What was once a labor-intensive confidence scheme, limited by the number of conversations a single operator could sustain, has become an industrialized operation targeting thousands of victims simultaneously. This article examines how organized criminal networks have adopted large language models, deepfake video, voice cloning, and AI-generated imagery to conduct romance fraud at unprecedented scale. Drawing on recent empirical research demonstrating that LLM-powered scam agents achieve 46% victim compliance rates compared to 18% for human operators, and that commercial safety filters detect 0.0% of romance-baiting dialogues, I argue that three categories of institutional actors bear ethical responsibility for this crisis: AI developers who fail to red-team for social engineering use cases, social media platforms that host open fraud training materials, and dating platforms that lack AI-generated content detection. The article situates this crisis within broader debates on AI governance, platform accountability, and technology-facilitated abuse, proposes a set of priority governance recommendations, and contends that the regulatory attention gap between romance fraud and other AI harms reflects structural biases in how policymakers value different categories of victims. Financial losses now exceed $1 billion annually in the United States alone, and at least 46 teenagers have died by suicide following AI-enabled sextortion, a lethal variant of the same criminal ecosystem.

Cited: 0
If the code hurts: on pain and virtual brains
AI and Ethics Pub Date: 2026-04-27 DOI: 10.1007/s43681-026-01063-w
Giuseppe Comerci
The creation of virtual brains has long attracted the attention of the scientific community. Over time, various projects have aimed to produce brain simulations of varying complexity. These scientific efforts have often been accompanied by lively philosophical debates highlighting the ethical implications of brain simulations, such as their moral status, the possibility of developing consciousness and personhood, and concerns about unethical experimentation. In particular, the possibility that a highly complex virtual brain could experience pain has been a central concern, especially in the context of the Human Brain Project. This paper presents a conceptual analysis that traces the legacy of this debate and situates it within the current scientific context. The issue warrants renewed attention for two reasons. First, the literature has generally treated pain as an abstract concept, without situating it within a specific theoretical framework. Second, recent technologies for medical purposes, such as digital brain twins, promise to connect virtual brains with their biological counterparts, offering new perspectives on the question of pain. Adopting a mixed conception of pain that integrates the sensory and affective dimensions, two requirements for experiencing pain can be extrapolated. These requirements are then implemented in a comparative conceptual analysis to examine three different virtual brain architectures. The findings suggest that, from a theoretical standpoint, highly advanced virtual brains connected in real time to their biological counterparts could experience psychological pain.

Cited: 0