AI & Society: Latest Publications

Cuteness in avatar design: a cross-cultural study on the influence of baby schema features and other visual characteristics
IF 2.9
AI & Society Pub Date: 2024-02-26 DOI: 10.1007/s00146-024-01878-3
Shiri Lieber-Milo, Yair Amichai-Hamburger, Tomoko Yonezawa, Kazunori Sugiura
{"title":"Cuteness in avatar design: a cross-cultural study on the influence of baby schema features and other visual characteristics","authors":"Shiri Lieber-Milo,&nbsp;Yair Amichai-Hamburger,&nbsp;Tomoko Yonezawa,&nbsp;Kazunori Sugiura","doi":"10.1007/s00146-024-01878-3","DOIUrl":"10.1007/s00146-024-01878-3","url":null,"abstract":"<div><p>The concept of cuteness, which can evoke positive emotions in people, is an essential aspect to consider in artificial intelligence design. This study aimed to investigate whether the use of baby schema designed avatars in computer-mediated communication elicits higher positive attitudes than neutral avatars and whether the ethnicity of the cute avatars influences individuals' perceived level of cuteness. 485 participants from Israel and Japan viewed six avatar images, including three baby schema avatars of different visual characteristics and ethnicities (Caucasian, Asian, and Black) and three neutral avatars. Participants rated their attitudes on each avatar, and the results revealed that the baby schema designed avatars were rated cuter, more likable, approachable, and pleasant than the neutral mature avatars. Cultural differences were also evident, as the Caucasian baby schema avatar was rated cuter among Japanese participants, while the Asian and Black baby schema avatars were rated cuter among Israeli respondents. The study findings suggest that cute avatar design can serve as a powerful tool for promoting positive interactions in computer-mediated communication, especially in cultures that highly value cuteness, such as Japan. However, the subjective nature of cuteness is evident as attitudes toward cuteness varied significantly across cultures and individuals. This research highlights the significance of cultural diversity and emphasizes the importance of considering cuteness as a crucial aspect of artificial intelligence design, particularly when creating avatars intended to elicit positive emotions from users. Therefore, designers should be mindful of potential cultural and individual differences in the perception of cuteness while developing avatars for various applications.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 2","pages":"627 - 637"},"PeriodicalIF":2.9,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01878-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140428509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Narrativity and responsible and transparent AI practices
IF 2.9
AI & Society Pub Date: 2024-02-25 DOI: 10.1007/s00146-024-01881-8
Paul Hayes, Noel Fitzpatrick
{"title":"Narrativity and responsible and transparent ai practices","authors":"Paul Hayes,&nbsp;Noel Fitzpatrick","doi":"10.1007/s00146-024-01881-8","DOIUrl":"10.1007/s00146-024-01881-8","url":null,"abstract":"<div><p>This paper builds upon recent work in narrative theory and the philosophy of technology by examining the place of transparency and responsibility in discussions of AI, and what some of the implications of this might be for thinking ethically about AI and especially AI practices, that is, the structured social activities implicating and defining what AI is. In this paper, we aim to show how pursuing a narrative understanding of technology and AI can support knowledge of process and practice through transparency, as well help summon us to responsibility through visions of possibility and of actual harms arising from AI practices. We provide reflections on the relations between narrative, transparency and responsibility, building an argument that narratives (about AI, practices, and those persons implicated in its design, implementation, and deployment) support the kind of knowing and understanding that is the aim of transparency, and, moreover, that such knowledge supports responsibility in informing agents and activating responsibility through creating knowledge about something that can and should be responded to. Furthermore, we argue for considering an expansion of the kinds of practices that we might legitimately consider ‘AI practices’ given the diverse set of (often materially embedded) activities that sustain and are sustained by AI that link directly to its ethical acceptability and which are rendered transparent in the narrative mode. Finally, we argue for an expansion of narratives and narrative sources to be considered in questions of AI, understanding that transparency is multi-faceted and found in stories from diverse sources and people.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 2","pages":"605 - 625"},"PeriodicalIF":2.9,"publicationDate":"2024-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01881-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140432632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration
IF 2.9
AI & Society Pub Date: 2024-02-25 DOI: 10.1007/s00146-024-01871-w
Lee Hadlington, Maria Karanika-Murray, Jane Slater, Jens Binder, Sarah Gardner, Sarah Knight
{"title":"Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration","authors":"Lee Hadlington,&nbsp;Maria Karanika-Murray,&nbsp;Jane Slater,&nbsp;Jens Binder,&nbsp;Sarah Gardner,&nbsp;Sarah Knight","doi":"10.1007/s00146-024-01871-w","DOIUrl":"10.1007/s00146-024-01871-w","url":null,"abstract":"<div><p>There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public view these, especially in view of the value of public attitudes for influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. This study is the first to explore public perceptions of and attitudes towards AI in Defence. A series of four focus groups were conducted with 20 members of the UK public, aged between 18 and 70, to explore their perceptions and attitudes towards AI use in general contexts and, more specifically, applications of AI in Defence settings. Thematic analysis revealed four themes and eleven sub-themes, spanning the role of humans in the system, the ethics of AI use in Defence, trust in AI versus trust in the organisation, and gathering information about AI in Defence. Participants demonstrated a variety of misconceptions about the applications of AI in Defence, with many assuming that a variety of different technologies involving AI are already being used. This highlighted a confluence between information from reputable sources combined with narratives from the mass media and conspiracy theories. The study demonstrates gaps in knowledge and misunderstandings that need to be addressed, and offers practical insights for keeping the public reliably, accurately, and adequately informed about the capabilities, limitations, benefits, and risks of AI in Defence.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 2","pages":"277 - 290"},"PeriodicalIF":2.9,"publicationDate":"2024-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01871-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140432976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Surveying Judges about artificial intelligence: profession, judicial adjudication, and legal principles
IF 2.9
AI & Society Pub Date: 2024-02-23 DOI: 10.1007/s00146-024-01869-4
Andreia Martinho
{"title":"Surveying Judges about artificial intelligence: profession, judicial adjudication, and legal principles","authors":"Andreia Martinho","doi":"10.1007/s00146-024-01869-4","DOIUrl":"10.1007/s00146-024-01869-4","url":null,"abstract":"<div><p>Artificial Intelligence (AI) is set to bring changes to legal systems. These technologies may have positive practical implications when it comes to access, efficiency, and accuracy in Justice. However, there are still many uncertainties and challenges associated with the implementation of AI in the legal space. In this research, we surveyed Judges on critical challenges related to the <i>Judging Profession</i> in the AI paradigm; <i>Automated Adjudication</i>; and <i>Legal Principles</i>. Our results suggest that (i) Judges are hesitant about changes in their profession. They signal the need for adequate training that fosters legal literacy in AI, but are less open to changes in legal writing or their social and institutional role; (ii) Judges believe higher levels of automation only lead to fair outcomes if used in earlier phases of adjudication; (iii) Judges believe and are concerned about AI leading to Techno-Legal Positivism; and (iv) Judges consider that Legal AI technologies may have a positive impact in some legal principles, as long as everyone has equal access to those technologies and <i>cybersecurity</i> and <i>judge on the loop</i> safeguards are in place; and (v) Judges are strongly concerned about the <i>de-humanization of Justice</i>. They consider that assessing evidence, analyzing arguments, and deciding on a legal case should be inherently human. By surveying these practitioners, we aim to foster a responsible, inclusive, and transparent innovation in Justice.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 2","pages":"569 - 584"},"PeriodicalIF":2.9,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140437280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Freedom, AI and God: why being dominated by a friendly super-AI might not be so bad
IF 2.9
AI & Society Pub Date: 2024-02-23 DOI: 10.1007/s00146-024-01863-w
Morgan Luck
{"title":"Freedom, AI and God: why being dominated by a friendly super-AI might not be so bad","authors":"Morgan Luck","doi":"10.1007/s00146-024-01863-w","DOIUrl":"10.1007/s00146-024-01863-w","url":null,"abstract":"<div><p>One response to the existential threat posed by a super-intelligent AI is to design it to be friendly to us. Some have argued that even if this were possible, the resulting AI would treat us as we do our pets. Sparrow (AI &amp; Soc. https://doi.org/10.1007/s00146-023-01698-x, 2023) argues that this would be a bad outcome, for such an AI would dominate us—resulting in our freedom being diminished (Pettit in Just freedom: A moral compass for a complex world. WW Norton &amp; Company, 2014). In this paper, I consider whether this would be such a bad outcome.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 2","pages":"291 - 298"},"PeriodicalIF":2.9,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01863-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140436739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
User-centered AI-based voice-assistants for safe mobility of older people in urban context
IF 2.9
AI & Society Pub Date: 2024-02-21 DOI: 10.1007/s00146-024-01865-8
Bokolo Anthony Jnr.
{"title":"User-centered AI-based voice-assistants for safe mobility of older people in urban context","authors":"Bokolo Anthony Jnr.","doi":"10.1007/s00146-024-01865-8","DOIUrl":"10.1007/s00146-024-01865-8","url":null,"abstract":"<div><p>Voice-assistants are becoming increasingly popular and can be deployed to offers a low-cost tool that can support and potentially reduce falls, injuries, and accidents faced by older people within the age of 65 and older. But, irrespective of the mobility and walkability challenges faced by the aging population, studies that employed Artificial Intelligence (AI)-based voice-assistants to reduce risks faced by older people when they use public transportation and walk in built environment are scarce. This is because the development of AI-based voice-assistants suitable for the mobility domain presents several techno–social challenges. Accordingly, this study aims to identify <i>user-centered</i> service design and functional requirements, techno–social factors, and further design an architectural model for an AI-based voice-assistants that provide personalized recommendation to reduce falls, injuries, and accidents faced by older people. Accordingly, a scoping review of the literature grounded on secondary data from 59 studies was conducted and descriptive analysis of the literature and content-related analysis of the literature was carried out. Findings from this study presents the perceived techno-socio factors that may influences older people use of AI-based voice-assistants. More importantly, this study presents user-centred service design and functional requirements needed to be considered in developing voice-assistants suitable for older people. Implications from this study provides AI techniques for implementing voice-assistants that provide safe mobility, walkability, and wayfinding for older people in urban areas.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 2","pages":"545 - 568"},"PeriodicalIF":2.9,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01865-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140442276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Negotiating the authenticity of AI: how the discourse on AI rejects human indeterminacy
IF 2.9
AI & Society Pub Date: 2024-02-20 DOI: 10.1007/s00146-024-01884-5
Siri Beerends, Ciano Aydin
{"title":"Negotiating the authenticity of AI: how the discourse on AI rejects human indeterminacy","authors":"Siri Beerends,&nbsp;Ciano Aydin","doi":"10.1007/s00146-024-01884-5","DOIUrl":"10.1007/s00146-024-01884-5","url":null,"abstract":"<div><p>In this paper, we demonstrate how the language and reasonings that academics, developers, consumers, marketers, and journalists deploy to accept or reject AI as authentic intelligence has far-reaching bearing on how we understand our human intelligence and condition. The discourse on AI is part of what we call the “authenticity negotiation process” through which AI’s “intelligence” is given a particular meaning and value. This has implications for scientific theory, research directions, ethical guidelines, design principles, funding, media attention, and the way people relate to and act upon AI. It also has great impact on humanity’s self-image and the way we negotiate what it means to be human, existentially, culturally, politically, and legally. We use a discourse analysis of academic papers, AI education programs, and online discussions to demonstrate how AI itself, as well as the products, services, and decisions delivered by AI systems are negotiated as authentic or inauthentic intelligence. In this negotiation process, AI stakeholders indirectly define and essentialize what being human(like) means. The main argument we will develop is that this process of indirectly defining and essentializing humans results in an elimination of the space for humans to be indeterminate. By eliminating this space and, hence, denying indeterminacy, the existential condition of the human being is jeopardized. Rather than re-creating humanity in AI, the AI discourse is re-defining what it means to be human and how humanity is valued and should be treated.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 2","pages":"263 - 276"},"PeriodicalIF":2.9,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01884-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140447441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Challenges of responsible AI in practice: scoping review and recommended actions
IF 2.9
AI & Society Pub Date: 2024-02-19 DOI: 10.1007/s00146-024-01880-9
Malak Sadek, Emma Kallina, Thomas Bohné, Céline Mougenot, Rafael A. Calvo, Stephen Cave
{"title":"Challenges of responsible AI in practice: scoping review and recommended actions","authors":"Malak Sadek,&nbsp;Emma Kallina,&nbsp;Thomas Bohné,&nbsp;Céline Mougenot,&nbsp;Rafael A. Calvo,&nbsp;Stephen Cave","doi":"10.1007/s00146-024-01880-9","DOIUrl":"10.1007/s00146-024-01880-9","url":null,"abstract":"<div><p>Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. Afterwards, we introduce a number of approaches to RAI from a range of disciplines, exploring their potential as solutions to the identified challenges. We anchor these solutions in practice through concrete examples, bridging the gap between the theoretical considerations of RAI and on-the-ground processes that currently shape how AI systems are built. Our work considers the socio-technical nature of RAI limitations and the resulting necessity of producing socio-technical solutions.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 1","pages":"199 - 215"},"PeriodicalIF":2.9,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01880-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140449379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance
IF 2.9
AI & Society Pub Date: 2024-02-19 DOI: 10.1007/s00146-024-01866-7
Alexander Blanchard, Christopher Thomas, Mariarosaria Taddeo
{"title":"Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance","authors":"Alexander Blanchard,&nbsp;Christopher Thomas,&nbsp;Mariarosaria Taddeo","doi":"10.1007/s00146-024-01866-7","DOIUrl":"10.1007/s00146-024-01866-7","url":null,"abstract":"<div><p>The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the <i>what</i> to the <i>how</i> of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative choices and corresponding tradeoffs that are involved in specifying guidance for the implementation of AI ethics principles in the defence domain. These correspond to: the AI lifecycle model used; the scope of stakeholder involvement; the accountability goals chosen; the choice of auditing requirements; and the choice of mechanisms for transparency and traceability. We provide initial recommendations for navigating these tradeoffs and highlight the importance of a pro-ethical institutional culture.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 1","pages":"185 - 198"},"PeriodicalIF":2.9,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01866-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139958505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
What makes full artificial agents morally different
IF 2.9
AI & Society Pub Date: 2024-02-18 DOI: 10.1007/s00146-024-01867-6
Erez Firt
{"title":"What makes full artificial agents morally different","authors":"Erez Firt","doi":"10.1007/s00146-024-01867-6","DOIUrl":"10.1007/s00146-024-01867-6","url":null,"abstract":"<div><p>In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various aspects of full-blown AMAs and presents the following argument: the creation of full-blown artificial moral agents, endowed with intentional mental states and moral emotions, and trained to align with human values, does not, by itself, guarantee that these systems will have <i>human</i> morality. Therefore, it is questionable whether they will be inclined to honor and follow what they perceive as incorrect moral values. we do not intend to claim that there is such a thing as a universally shared human morality, only that as there are different human communities holding different sets of moral values, the moral systems or values of the discussed artificial agents would be different from those held by human communities, for reasons we discuss in the paper.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 1","pages":"175 - 184"},"PeriodicalIF":2.9,"publicationDate":"2024-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01867-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139959346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0