Journal of Responsible Technology — Latest Articles

A turning point in AI: Europe's human-centric approach to technology regulation
Journal of Responsible Technology, vol. 23, Article 100128 · Pub Date: 2025-07-17 · DOI: 10.1016/j.jrt.2025.100128
Yavuz Selim Balcioğlu, Ahmet Alkan Çelik, Erkut Altindağ
Abstract: This article examines the European Union's Artificial Intelligence Act, a landmark piece of legislation that sets forth comprehensive rules for the development, deployment, and governance of artificial intelligence technologies within the EU. Emphasizing a human-centric approach, the Act aims to ensure AI's safe use, protect fundamental rights, and foster innovation within a framework that supports economic growth. Through a detailed analysis, the article explores the Act's key provisions, including its risk-based approach, its bans and restrictions on certain AI practices, and its measures for safeguarding fundamental rights. It also discusses the potential impact on SMEs, the importance of balancing regulation with innovation, and the need for the Act to adapt in response to technological advancements. The role of stakeholders in ensuring the Act's successful implementation and the significance of this legislative milestone for the future of AI are highlighted. The article concludes with reflections on the opportunities the Act presents for ethical AI development and the challenges ahead in maintaining its relevance and efficacy in a rapidly evolving technological landscape.
Citations: 0
Soft law for unintentional empathy: addressing the governance gap in emotion-recognition AI technologies
Journal of Responsible Technology, vol. 23, Article 100126 · Pub Date: 2025-07-16 · DOI: 10.1016/j.jrt.2025.100126
Andrew McStay, Vian Bakir
Abstract: Despite regulatory efforts, there is a significant governance gap in managing emotion-recognition AI technologies and those that emulate empathy. This paper asks: should international soft-law mechanisms, such as ethical standards, complement hard law in addressing governance gaps in emotion-recognition and empathy-emulating AI technologies? To argue that soft law can provide detailed guidance, particularly for research ethics committees and related boards advising on these technologies, the paper first explores how legal definitions of emotion recognition, especially in the EU AI Act, rest on reductive assumptions about emotion that have attracted physiognomic criticism. It then details that systems may be designed to intentionally empathise with their users, but also that empathy may be unintentional, or effectively incidental to how these systems work. Non-reductive approaches that avoid labelling emotion as conceived in the EU AI Act raise novel governance questions and a physiognomic critique of a more dynamic nature. The paper finds that international soft law can complement hard law, especially when critique is subtle but significant, when guidance is anticipatory in nature, and when detailed recommendations for developers are required.
Citations: 0
Ten simple guidelines for decolonising algorithmic systems
Journal of Responsible Technology, vol. 23, Article 100125 · Pub Date: 2025-07-15 · DOI: 10.1016/j.jrt.2025.100125
Dion R.J. O'Neale, Daniel Wilson, Paul T. Brown, Pascarn Dickinson, Manakore Rikus-Graham, Asia Ropeti
Abstract: As the scope and prevalence of algorithmic systems and artificial intelligence for decision-making expand, there is a growing understanding of the need for approaches that help anticipate adverse consequences and that support the development and deployment of algorithmic systems that are socially responsible and ethically aware. This has led to increasing interest in "decolonising" algorithmic systems as a method of managing and mitigating the harms and biases of algorithms, and of supporting social benefits from algorithmic decision-making for Indigenous peoples. This article presents ten simple guidelines for giving practical effect to foundational Māori (the Indigenous people of Aotearoa New Zealand) principles in the design, deployment, and operation of algorithmic systems. The guidelines are based on previously established literature on the ethical use of Māori data. Where possible, we relate these guidelines and recommendations to other development practices, for example open-source software. While not intended to be exhaustive, we hope that these guidelines can encourage those who work with Māori data in algorithmic systems to engage with processes and practices that support culturally appropriate and ethical approaches to algorithmic systems.
Citations: 0
Participatory research in low-resource settings: endeavours in epistemic justice at The Banyan, India
Journal of Responsible Technology, vol. 23, Article 100123 · Pub Date: 2025-06-24 · DOI: 10.1016/j.jrt.2025.100123
Mrinalini Ravi, Swarna Tyagi, Vandana Gopikumar, Emma Emily de Wit, Joske Bunders, Deborah Padgett, Barbara Regeer
Abstract: Involving persons with lived experience in knowledge generation through participatory research (PR) has become increasingly important for challenging power structures in knowledge production and research. For persons with lived experience of mental illness, participatory research has gained popularity since the early 1970s, but there is little empirical work from countries like India on how PR can be implemented in psychiatric settings. This study explores how persons with lived experience of mental illness can be engaged as peer researchers in a service-utilisation audit of The Banyan's inpatient, outpatient, and inclusive living facilities. The audit was an attempt by The Banyan to co-opt clients as peer researchers, thereby enhancing participatory approaches to care planning and provision. Notes and transcripts of research process activities (three training meetings), 180 interviews conducted as part of the audit, and follow-up focus group discussions (n = 4) with 18 peer researchers were used to document their experiences and gather feedback on the training and research process. We found that, set against a lack of formal education in the past, the opportunity and support to take part in a research endeavour elicited a sense of pride, relief, and liberation in peer researchers. Additionally, actualising the role of academic and researcher, rather than being a passive respondent to people in positions of intellectual and systemic power, engendered a sense of responsibility and accountability towards fellow peer researchers and the mental health system. Thirdly, supporting persons with experience of mental illness in participatory research activities, especially in low-resource settings, requires specific consideration of the practical conditions and adjustments needed to avoid tokenism. Finally, both peer and staff researchers spoke about persisting hierarchies between them, which deserve attention. We conclude that participatory research has significant scope among clients from disadvantaged communities in low-resource settings. Respondents repeatedly expressed an urgency for persons with lived experience to contribute to mental health pedagogy and, in so doing, disrupt archaic treatment approaches. Experiences from this enquiry also call for a rethink of how research training can be developed for individuals without formal education and with cognitive difficulties, with the help of auditory support systems, such that key concepts are available and accessible and long-term memory becomes less of a deterrent in the pursuit of knowledge and truth.
Citations: 0
A capability approach to ethical development and internal auditing of AI technology
Journal of Responsible Technology, vol. 22, Article 100121 · Pub Date: 2025-06-01 · DOI: 10.1016/j.jrt.2025.100121
Mark Graves, Emanuele Ratti
Abstract: Responsible artificial intelligence (AI) requires integrating ethical awareness into the full process of designing and developing AI, including ethics-based auditing of AI technology. We claim that the Capability Approach (CA) of Sen and Nussbaum grounds AI ethics in essential human freedoms and can increase awareness of the moral dimension in the technical decision-making of developers and data scientists constructing data-centric AI systems. Our use of CA focuses awareness on the ethical impact that day-to-day technical decisions have on the freedom of data subjects to make choices and live meaningful lives according to their own values. For internal auditing of AI technology development, we design and develop a lightweight ethical auditing tool (LEAT) that uses simple natural language processing (NLP) techniques to search design and development documents for relevant ethical characterizations. We describe how CA guides our design, demonstrate LEAT on both principle- and capabilities-based use cases, and characterize its limitations.
Citations: 0
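The LEAT tool itself is not described in detail in the abstract above, but its core idea, searching design and development documents for ethically relevant language with simple NLP techniques, can be sketched as a keyword scan keyed to capabilities. Everything below is an illustrative assumption: the vocabulary, function names, and matching strategy are invented for this sketch and are not the authors' implementation.

```python
import re

# Illustrative capability-related vocabulary (assumed; the paper's actual
# lexicon is not given in the abstract).
CAPABILITY_TERMS = {
    "autonomy": ["choice", "consent", "opt out"],
    "health": ["well-being", "safety", "harm"],
    "affiliation": ["community", "discrimination", "inclusion"],
}


def audit_document(text, lexicon=CAPABILITY_TERMS):
    """Scan a design document for sentences touching ethical characterizations.

    Returns {capability: [matching sentences]} -- a crude stand-in for the
    'simple NLP techniques' the abstract mentions.
    """
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = {}
    for capability, terms in lexicon.items():
        keywords = [capability] + terms
        matched = [s for s in sentences
                   if any(k in s.lower() for k in keywords)]
        if matched:
            hits[capability] = matched
    return hits


doc = ("Users can opt out of data collection at any time. "
       "The model may cause harm if labels encode discrimination.")
report = audit_document(doc)
```

A real auditing pass would need stemming, synonym expansion, and human review of the flagged passages; the point of the sketch is only that day-to-day design documents can be surfaced against a capability vocabulary.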
A robot with human values: assessing value-sensitive design in an agri-food context
Journal of Responsible Technology, vol. 22, Article 100120 · Pub Date: 2025-04-27 · DOI: 10.1016/j.jrt.2025.100120
Else Giesbers, Kelly Rijswijk, Mark Ryan, Mashiat Hossain, Aneesh Chauhan
Abstract: Value Sensitive Design (VSD) aims to take societal values on board in the design of innovative technologies. While much has been written on VSD and the added value of using it for technology development, limited literature is available on its application to the agri-food sector. This article describes a VSD case study on an agri-food robotic system and reflects on the insights gained into the added value of using VSD. The paper concludes that while VSD contributes to broadening the perspective of technical researchers on non-technical requirements, its application in this case was constrained by five factors related to the nature of the VSD approach: i) a lack of clarity on dealing with conflicting values; ii) the ideal timing of VSD is unclear; iii) VSD lacks effectiveness when technology development is outsourced; iv) VSD does not account for the time- and context-specificity of values; and v) difficulties in operationalising values in VSD.
Citations: 0
Decision-making on an AI-supported youth mental health app: A multilogue among ethicists, social scientists, AI researchers, biomedical engineers, young experiential experts, and psychiatrists
Journal of Responsible Technology, vol. 22, Article 100119 · Pub Date: 2025-04-18 · DOI: 10.1016/j.jrt.2025.100119
Dorothee Horstkötter, Mariël Kanne, Simona Karbouniaris, Noussair Lazrak, Maria Bulgheroni, Ella Sheltawy, Laura Giani, Margherita La Gamba, Esmeralda Ruiz Pujadas, Marina Camacho, Finty Royle, Irene Baggetto, Sinan Gülöksüz, Bart Rutten, Jim van Os
Abstract: This article explores the decision-making processes in the ongoing development of an AI-supported youth mental health app. Document analysis reveals decisions taken during the grant proposal and funding phase and reflects upon the reasons why AI is incorporated in innovative youth mental health care. An innovative multilogue among the transdisciplinary team of researchers, covering ethicists, social scientists, AI experts, biomedical engineers, young experts by experience, and psychiatrists, points out which decisions are taken and how. This covers i) the role of a biomedical and exposomic understanding of psychiatry as compared to a phenomenological and experiential perspective, ii) the impact and limits of AI co-creation by young experts by experience and mental health experts, and iii) the different perspectives regarding the impact of AI on autonomy, empowerment, and human relationships. The multilogue does not merely highlight the steps taken during human decision-making in AI development; it also raises awareness of the many complexities, and sometimes contradictions, of engaging in transdisciplinary work, and it points towards the ethical challenges of digitalized youth mental health care.
Citations: 0
Responsible AI innovation in the public sector: Lessons from and recommendations for facilitating Fundamental Rights and Algorithms Impact Assessments
Journal of Responsible Technology, vol. 22, Article 100118 · Pub Date: 2025-04-03 · DOI: 10.1016/j.jrt.2025.100118
I.M. Muis, J. Straatman, B.A. Kamphorst
Abstract: Since the initial development of the Fundamental Rights and Algorithms Impact Assessment (FRAIA) in 2021, there has been increasing interest from public sector organizations in gaining experience with performing a FRAIA in the context of developing, procuring, and deploying AI systems. In this contribution, we share observations from fifteen FRAIA trajectories performed in the field within the Dutch public sector. Based on our experiences facilitating these trajectories, we offer a set of recommendations directed at practitioners, with the aim of helping organizations make the best use of FRAIA and similar impact assessment instruments. We conclude by calling for the development of an informal FRAIA community in which practical handholds and advice can be shared, so as to promote responsible AI innovation by ensuring that human decision-making around AI and other algorithms is well informed and well documented with respect to the protection of fundamental rights.
Citations: 0
Piloting a maturity model for responsible artificial intelligence: A Portuguese case study
Journal of Responsible Technology, vol. 22, Article 100117 · Pub Date: 2025-04-02 · DOI: 10.1016/j.jrt.2025.100117
Rui Miguel Frazão Dias Ferreira, António GRILO, Maria MAIA
Abstract: Recently, frameworks and guidelines aiming to assist trustworthiness in organizations and to assess ethical issues related to the development and use of Artificial Intelligence (AI) have been translated into self-assessment checklists and other instruments. However, such tools can be very time-consuming to apply. Aiming to develop a more practical tool, an Industry-Wide Maturity Model for Responsible AI was piloted in three companies and two research centres in Portugal. Results show that organizations are aware of the requirements (44 %) for deploying a responsible AI approach and take a reactive stance toward implementation, being willing to integrate further requirements (33 %) into their business processes. The proposed model was welcomed, and companies showed openness to using it consistently, since it helped identify gaps and needs in fostering a more trustworthy approach to the development and deployment of AI.
Citations: 0
The ethics of bioinspired animal-robot interaction: A relational meta-ethical approach
Journal of Responsible Technology, vol. 22, Article 100116 · Pub Date: 2025-03-22 · DOI: 10.1016/j.jrt.2025.100116
Marco Tamborini
Abstract: In this article, I focus on a specific aspect of biorobotics: biohybrid interaction between bioinspired robots and animals. My goal is to analyze the ethical and epistemic implications of this practice, starting from a central question: is it ethically permissible to have a bioinspired robot that mimics and reproduces the behaviors and/or morphology of an animal interact with a particular population, even if the animals do not know that the object they are interacting with is a robot rather than a conspecific? My answer is that the interaction between animals and bioinspired robots is ethically acceptable if the animal actively participates in the language game (in the sense of Coeckelbergh) established with the robot. I proceed as follows. First, I define the field of biorobotics and describe its four macro-categories. Second, I present concrete examples of interactive biorobotics, showing two emblematic cases in which the relationship between bioinspired robots and animals plays a central role. Third, I address one key issue, among many, in applied ethics regarding my ethical question. Fourth, I explore the question on a meta-ethical level, drawing on the theories of David Gunkel and Mark Coeckelbergh as well as the linguistic approach and ethics of the late Ludwig Wittgenstein. Lastly, I argue that, from a meta-ethical perspective, the original ethical question turns out to be misplaced. The ethical boundary lies not in the distinction between a real or fake relationship between the robot and the organism, but in the degree of mutual participation and understanding between the entities involved.
Citations: 0