Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency: Latest Publications

Reducing sentiment polarity for demographic attributes in word embeddings using adversarial learning
Chris Sweeney, M. Najafian
DOI: 10.1145/3351095.3372837
Abstract: The use of word embedding models in sentiment analysis has gained a lot of traction in the Natural Language Processing (NLP) community. However, many inherently neutral word vectors describing demographic identity have unintended implicit correlations with negative or positive sentiment, resulting in unfair downstream machine learning algorithms. We leverage adversarial learning to decorrelate demographic identity term word vectors with positive or negative sentiment, and re-embed them into the word embeddings. We show that our method effectively minimizes unfair positive/negative sentiment polarity while retaining the semantic accuracy of the word embeddings. Furthermore, we show that our method effectively reduces unfairness in downstream sentiment regression and can be extended to reduce unfairness in toxicity classification tasks.
Published: 2020-01-27
Citations: 33

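A minimal sketch of the adversarial decorrelation idea summarized in the abstract above. This is not the authors' released code: the architecture, hyperparameters, and toy tensors below are assumptions made only to illustrate the training dynamic, in which an encoder re-embeds word vectors to stay close to the originals while an adversary is kept at chance when trying to recover sentiment polarity from demographic identity terms.

```python
# Illustrative sketch only (assumed architecture and hyperparameters),
# not the paper's implementation.
import torch
import torch.nn as nn

dim = 300  # embedding dimensionality (assumed)

# The encoder re-embeds a word vector; the adversary tries to predict
# sentiment polarity (positive vs. negative) from the re-embedded vector.
encoder = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
adversary = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

# Toy stand-ins: identity-term vectors carrying spurious sentiment labels,
# plus a general vocabulary sample used to preserve semantic content.
identity_vecs = torch.randn(128, dim)
sentiment_labels = torch.randint(0, 2, (128, 1)).float()
vocab_vecs = torch.randn(1024, dim)
uninformative = torch.full_like(sentiment_labels, 0.5)  # target: 50/50 sentiment

lam = 1.0  # fidelity vs. debiasing trade-off (assumed)

for step in range(200):
    # 1) Train the adversary to detect sentiment in re-embedded identity terms.
    adv_loss = bce(adversary(encoder(identity_vecs).detach()), sentiment_labels)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the encoder to stay close to the original vectors (semantic
    #    fidelity) while leaving the adversary at chance on identity terms.
    fidelity = mse(encoder(vocab_vecs), vocab_vecs)
    debias = bce(adversary(encoder(identity_vecs)), uninformative)
    enc_loss = fidelity + lam * debias
    opt_enc.zero_grad()
    enc_loss.backward()
    opt_enc.step()

# Re-embed the full vocabulary with the trained encoder.
debiased_vecs = encoder(vocab_vecs).detach()
```

In the paper's setting the identity terms and sentiment signal would come from real embeddings and a sentiment source rather than the random tensors used here.
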
Integrating FATE/critical data studies into data science curricula: where are we going and how do we get there?
J. Bates, D. Cameron, Alessandro Checco, Paul D. Clough, F. Hopfgartner, Suvodeep Mazumdar, L. Sbaffi, Peter Stordy, Antonio de la Vega de León
DOI: 10.1145/3351095.3372832
Abstract: There have been multiple calls for integrating topics related to fairness, accountability, transparency, ethics (FATE) and social justice into Data Science curricula, but little exploration of how this might work in practice. This paper presents the findings of a collaborative auto-ethnography (CAE) engaged in by a MSc Data Science teaching team based at the University of Sheffield (UK) Information School, where FATE/Critical Data Studies (CDS) topics have been a core part of the curriculum since 2015/16. In this paper, we adopt the CAE approach to reflect on our experiences of working at the intersection of disciplines, and our progress and future plans for integrating FATE/CDS into the curriculum. We identify a series of challenges for deeper FATE/CDS integration related to our own competencies and the wider socio-material context of Higher Education in the UK. We conclude with recommendations for ourselves and the wider FATE/CDS orientated Data Science community.
Published: 2020-01-27
Citations: 29

Fairness is not static: deeper understanding of long term fairness via simulation studies
A. D'Amour, Hansa Srinivasan, James Atwood, P. Baljekar, D. Sculley, Yoni Halpern
DOI: 10.1145/3351095.3372878
Abstract: As machine learning becomes increasingly incorporated within high impact decision ecosystems, there is a growing need to understand the long-term behaviors of deployed ML-based decision systems and their potential consequences. Most approaches to understanding or improving the fairness of these systems have focused on static settings without considering long-term dynamics. This is understandable; long term dynamics are hard to assess, particularly because they do not align with the traditional supervised ML research framework that uses fixed data sets. To address this structural difficulty in the field, we advocate for the use of simulation as a key tool in studying the fairness of algorithms. We explore three toy examples of dynamical systems that have been previously studied in the context of fair decision making for bank loans, college admissions, and allocation of attention. By analyzing how learning agents interact with these systems in simulation, we are able to extend previous work, showing that static or single-step analyses do not give a complete picture of the long-term consequences of an ML-based decision system. We provide an extensible open-source software framework for implementing fairness-focused simulation studies and further reproducible research, available at https://github.com/google/ml-fairness-gym.
Published: 2020-01-27
Citations: 161

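The bank-loan example mentioned in the abstract above lends itself to a compact simulation. The sketch below does not use the ml-fairness-gym API; the group score distributions, threshold policy, and update dynamics are illustrative assumptions meant only to show how a policy applied identically at each step can still shift group outcomes over many rounds.

```python
# Toy lending simulation (illustrative assumptions; not the ml-fairness-gym API).
import random

random.seed(0)

def simulate(threshold: float, steps: int = 5000):
    # Two groups start with different mean credit scores (assumed values);
    # a score is treated as the applicant's probability of repayment.
    mean_score = {"A": 0.62, "B": 0.55}
    for _ in range(steps):
        for group in mean_score:
            score = min(max(random.gauss(mean_score[group], 0.1), 0.0), 1.0)
            if score < threshold:
                continue  # loan denied: the group's distribution is unchanged
            repaid = random.random() < score
            # Feedback dynamics: repayment nudges the group's mean score up,
            # default pulls it down more sharply.
            mean_score[group] += 0.002 if repaid else -0.004
            mean_score[group] = min(max(mean_score[group], 0.0), 1.0)
    return mean_score

# The same threshold is applied to both groups at every step, yet their
# score distributions (and hence approval rates) can drift apart over time.
print(simulate(threshold=0.6))
```

A single-step audit of this policy would only report that both groups face the same threshold; running the dynamics forward is what exposes how the groups' positions diverge, which is the kind of long-term effect the paper argues static analyses miss.
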
Studying up: reorienting the study of algorithmic fairness around issues of power
Chelsea Barabas, Colin Doyle, JB Rubinovitz, Karthik Dinakar
DOI: 10.1145/3351095.3372859
Abstract: Research within the social sciences and humanities has long characterized the work of data science as a sociotechnical process, comprised of a set of logics and techniques that are inseparable from specific social norms, expectations and contexts of development and use. Yet all too often the assumptions and premises underlying data analysis remain unexamined, even in contemporary debates about the fairness of algorithmic systems. This blindspot exists in part because the methodological toolkit used to evaluate the fairness of algorithmic systems remains limited to a narrow set of computational and legal modes of analysis. In this paper, we expand on Elish and Boyd's [17] call for data scientists to develop more robust frameworks for understanding their work as situated practice by examining a specific methodological debate within the field of anthropology, frequently referred to as the practice of "studying up". We reflect on the contributions that the call to "study up" has made in the field of anthropology before making the case that the field of algorithmic fairness would similarly benefit from a reorientation "upward". A case study from our own work illustrates what it looks like to reorient one's research questions "up" in a high-profile debate regarding the fairness of an algorithmic system - namely, pretrial risk assessment in American criminal law. We discuss the limitations of contemporary fairness discourse with regard to pretrial risk assessment before highlighting the insights gained when we reframe our research questions to focus on those who inhabit positions of power and authority within the U.S. court system. Finally, we reflect on the challenges we have encountered in implementing data science projects that "study up". In the process, we surface new insights and questions about what it means to ethically engage in data science work that directly confronts issues of power and authority.
Published: 2020-01-27
Citations: 70

Where do algorithmic accountability and explainability frameworks take us in the real world?: from theory to practice
Katarzyna Szymielewicz, A. Bacciarelli, F. Hidvégi, Agata Foryciarz, Soizic Pénicaud, M. Spielkamp
DOI: 10.1145/3351095.3375683
Abstract: This hands-on session takes academic concepts and their formulation in policy initiatives around algorithmic accountability and explainability and tests them against real cases. In small groups we will (1) test selected frameworks on algorithmic accountability and explainability against a concrete case study (that likely constitutes a human rights violation) and (2) test different formats to explain important aspects of an automated decision-making process (such as input data, type of an algorithm used, design decisions and technical parameters, expected outcomes) to various audiences (end users, affected communities, watchdog organisations, public sector agencies and regulators). We invite participants with various backgrounds: researchers, technologists, human rights advocates, public servants and designers.
Published: 2020-01-27
Citations: 3

Burn, dream and reboot!: speculating backwards for the missing archive on non-coercive computing
Helen Pritchard, E. Snodgrass, R. Morrison, Loren Britton, Joana Moll
DOI: 10.1145/3351095.3375697
Abstract: Whether one is speaking of barbed wire, the assembly line or computer operating systems, the history of coercive technologies for the automation of tasks has focused on optimization, determinate outcomes and an ongoing disciplining of components and bodies. Automated technologies of the present emerge and are marked by this lineage of coercive modes of implementation, whose scarred history of techniques of discrimination, exploitation and extraction point to an archive of automated injustices in computing, a history that continues to charge present paradigms and practices of computing. This workshop addresses the history of coercive technologies through attuning to how we perform speculation within practices of computing through a renewed attention to this history. We go backwards into the archive, rather than racing forward and proposing ever new speculative futures of automation. This is because speculative creative approaches are often conceived and positioned as methodological toolkits for addressing computing practices by imagining for/with others for a "future otherwise". We argue that "speculation" as the easy-go-to of designers and artists trying to address automated injustices needs some undoing, as without work it will always be confined within ongoing legacies of coercive modes of computing practice. Instead of creating more just-worlds, the generation of ever-new futures by creative speculation often merely reinforces the project of coercive computing. For this workshop, drawing on queer approaches to resisting futures and informed by activist feminist engagements with archives, we invite participants to temporarily resist imagining futures and instead to speculate backwards. We speculate backwards to various moments, artefacts and practices within computing history. What does it mean to understand techniques of computing and automation as coercive infrastructures? How did so many of the dreams and seeming promises of computing turn into the coercive practices that we see today? Has computing as a practice become so imbued with coercive techniques that we find it hard to imagine otherwise? Together, we will build a speculative understanding and possible archive of non-coercive computing. In the words of Alexis Pauline Gumbs, the emerging archive proposes "how did their dreams make rooms to dream in"... or not, in the case of coercive practices of computing. And "what if she changes her dream?" What if we reboot this dream?
Published: 2020-01-27
Citations: 0

Hardwiring discriminatory police practices: the implications of data-driven technological policing on minority (ethnic and religious) people and communities
P. Williams, Eric Kind
DOI: 10.1145/3351095.3375695
Abstract: On data-based policing.
Published: 2020-01-27
Citations: 1

Can an algorithmic system be a 'friend' to a police officer's discretion?: ACM FAT 2020 translation tutorial
M. Oswald, David Powell
DOI: 10.1145/3351095.3375673
Abstract: This tutorial aims to increase understanding of the importance of discretion in police decision-making. It will guide computer scientists, policy-makers, lawyers and others in considering practical and technical issues crucial to avoiding the prejudicial and instead develop algorithms that are supportive - a 'friend' - to legitimate discretionary decision-making. It combines explanation of the relevant law and related literature with discussion based upon deep operational experience in the area of preventative and protective policing work. Autonomy and discretion are fundamental to police work, not only in relation to strategy and policy but for day-to-day operational decisions taken by front line officers. Such discretion 'recognizes the fallibility of interfacing rules with their field of application.' (Hildebrandt 2016). This discretion is not unbounded however, and English common law expects discretion to be exercised reasonably and fairly. Conversely, discretion must not be fettered unlawfully, by failing to take a relevant factor into account when making a decision, or by abdicating responsibility to another person, body or 'thing'. Algorithmic systems have the potential to contribute to factors relevant to the decision in question at the point of interaction between their outputs and the real-world outcome for the victim, offender and/or community. Algorithmic decision tools present a number of challenges to legitimate discretionary police decision-making. Unnuanced outputs could be highly influential on the human decision-maker (Cooke and Michie 2012) and may undermine discretionary power to deal with atypical cases and 'un-thought of' factors that rely upon uncodified knowledge (Oswald 2018). Practical and technical considerations will be crucial to developing MLA that are supportive to discretionary decision-making. These include the methodological approach, design of the human-computer interface having regard to the decision-maker's responsibility to give reasons for their decision, the avoidance of unnuanced or over-confident framing of results, understanding of the policing context in which the MLA will operate, and consideration of the implications of organisational culture and processes to the MLA's influence.
Published: 2020-01-27
Citations: 1

Deconstructing FAT: using memories to collectively explore implicit assumptions, values and context in practices of debiasing and discrimination-awareness
Doris Allhutter, Bettina Berendt
DOI: 10.1145/3351095.3375688
Abstract: Research in fairness, accountability, and transparency (FAT) in socio-technical systems needs to take into account how practices of computing are entrenched with power relations in complex and multi-layered ways. Trying to disentangle the way in which structural discrimination and normative computational concepts and methods are intertwined, this frequently raises the question of WHO are the actors that shape technologies and research agendas---who gets to speak and to define bias, (un)fairness, and discrimination? "Deconstructing FAT" is a CRAFT workshop that aims at complicating this question by asking how "we" as researchers in FAT (often unknowingly) mobilize implicit assumptions, values and beliefs that reflect our own embeddedness in power relations, our disciplinary ways of thinking, and our historically, locally, and culturally-informed ways of solving computational problems or approaching our research. This is a vantage point to make visible and analyze the normativity of technical approaches, concepts and methods that are part of the repertoire of FAT research. Inspired by a previous international workshop [1], this CRAFT workshop engages an interdisciplinary panel of FAT researchers in a deconstruction exercise that traces the following issues: (1) FAT research frequently speaks of social bias that is amplified by algorithmic systems, of the problem of discriminatory consequences that is to be solved, and of underprivileged or vulnerable groups that need to be protected. What does this perspectivity imply in terms of the approaches, methods and metrics that are being applied? How do methods of debiasing and discrimination-awareness enact the epistemic power of a perspective of privilege as their norm? (2) FAT research has emphasized the need for multi- or interdisciplinary approaches to get a grip on the complex intertwining of social power relations and the normativity of computational methods, norms and practices. Clearly, multi- and interdisciplinary research includes different normative frameworks and ways of thinking that need to be negotiated. This is complicated by the fact that these frameworks are not fully transparent and ready for reflection. What are the normative implications of interdisciplinary collaboration in FAT research? (3) While many problems of discrimination, marginalization and exploitation can be similar across places, they can also have specific local shapes. How can FAT research e.g. consider historically grown specifics such as the effects of different colonial histories? If these specifics make patterns of discrimination have different and more nuanced dimensions than clear-cut 'redlining', what does this imply? To explore these questions, we use the method of 'mind scripting' which is based in theories of discourse, ideology, memory and affect and aims at investigating hidden patterns of meaning making in written memories of the panelists [2]. The workshop strives to challenge some of the implicit norms an…
Published: 2020-01-27
Citations: 0

Algorithmic accountability in public administration: the GDPR paradox
Sunny Kang
DOI: 10.1145/3351095.3373153
Abstract: The EU General Data Protection Regulation ("GDPR") is often represented as a larger than life behemoth that will fundamentally transform the world of big data. Abstracted from its constituent parts of corresponding rights, responsibilities, and exemptions, the operative scope of the GDPR can be unduly aggrandized, when in reality, it caters to the specific policy objectives of legislators and institutional stakeholders. With much uncertainty ahead on the precise implementation of the GDPR, academic and policy discussions are debating the adequacy of protections for automated decision-making in GDPR Articles 13 (right to be informed of automated treatment), 15 (right of access by the data subject), and 22 (safeguards to profiling). Unfortunately, the literature to date disproportionately focuses on the impact of AI in the private sector, and deflects any extensive review of automated enforcement tools in public administration. Even though the GDPR enacts significant safeguards against automated decisions, it does so with deliberate design: to balance the interests of data protection with the growing demand for algorithms in the administrative state. In order to facilitate inter-agency data flows and sensitive data processing that fuel the predictive power of algorithmic enforcement tools, the GDPR decisively surrenders to the procedural autonomy of Member States to authorize these practices. Yet, due to a dearth of research on the GDPR's stance on government deployed algorithms, it is not widely known that public authorities can benefit from broadly worded exemptions to restrictions on automated decision-making, and even circumvent remedies for data subjects through national legislation. The potential for public authorities to invoke derogations from the GDPR must be contained by the fundamental guarantees of due process, judicial review, and equal treatment. This paper examines the interplay of these principles within the prospect of algorithmic decision-making by public authorities.
Published: 2020-01-27
Citations: 2