Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency: Latest Publications

Awareness in practice: tensions in access to sensitive attribute data for antidiscrimination
Miranda Bogen, A. Rieke, Shazeda Ahmed
{"title":"Awareness in practice: tensions in access to sensitive attribute data for antidiscrimination","authors":"Miranda Bogen, A. Rieke, Shazeda Ahmed","doi":"10.1145/3351095.3372877","DOIUrl":"https://doi.org/10.1145/3351095.3372877","url":null,"abstract":"Organizations cannot address demographic disparities that they cannot see. Recent research on machine learning and fairness has emphasized that awareness of sensitive attributes, such as race and sex, is critical to the development of interventions. However, on the ground, the existence of these data cannot be taken for granted. This paper uses the domains of employment, credit, and healthcare in the United States to surface conditions that have shaped the availability of sensitive attribute data. For each domain, we describe how and when private companies collect or infer sensitive attribute data for antidiscrimination purposes. An inconsistent story emerges: Some companies are required by law to collect sensitive attribute data, while others are prohibited from doing so. Still others, in the absence of legal mandates, have determined that collection and imputation of these data are appropriate to address disparities. This story has important implications for fairness research and its future applications. If companies that mediate access to life opportunities are unable or hesitant to collect or infer sensitive attribute data, then proposed techniques to detect and mitigate bias in machine learning models might never be implemented outside the lab. We conclude that today's legal requirements and corporate practices, while highly inconsistent across domains, offer lessons for how to approach the collection and inference of sensitive data in appropriate circumstances. We urge stakeholders, including machine learning practitioners, to actively help chart a path forward that takes both policy goals and technical needs into account.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132953549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
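The paper's core tension, that bias detection presupposes access to sensitive attribute data, can be made concrete with a short sketch. The check below computes a demographic parity gap (the difference in positive-decision rates across groups); the data and group labels are hypothetical, and the point is simply that the computation cannot run if the group column was never collected or inferred.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates across groups.
    Requires a sensitive-attribute label for every individual; if that
    column was never collected, this check cannot run at all."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical hiring decisions with self-reported group labels.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```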
Explainability fact sheets: a framework for systematic assessment of explainable approaches
Kacper Sokol, Peter A. Flach
{"title":"Explainability fact sheets: a framework for systematic assessment of explainable approaches","authors":"Kacper Sokol, Peter A. Flach","doi":"10.1145/3351095.3372870","DOIUrl":"https://doi.org/10.1145/3351095.3372870","url":null,"abstract":"Explanations in Machine Learning come in many forms, but a consensus regarding their desired properties is yet to emerge. In this paper we introduce a taxonomy and a set of descriptors that can be used to characterise and systematically assess explainable systems along five key dimensions: functional, operational, usability, safety and validation. In order to design a comprehensive and representative taxonomy and associated descriptors we surveyed the eXplainable Artificial Intelligence literature, extracting the criteria and desiderata that other authors have proposed or implicitly used in their research. The survey includes papers introducing new explainability algorithms to see what criteria are used to guide their development and how these algorithms are evaluated, as well as papers proposing such criteria from both computer science and social science perspectives. This novel framework allows to systematically compare and contrast explainability approaches, not just to better understand their capabilities but also to identify discrepancies between their theoretical qualities and properties of their implementations. We developed an operationalisation of the framework in the form of Explainability Fact Sheets, which enable researchers and practitioners alike to quickly grasp capabilities and limitations of a particular explainable method. When used as a Work Sheet, our taxonomy can guide the development of new explainability approaches by aiding in their critical evaluation along the five proposed dimensions.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117104233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 219
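As one way to picture the operationalisation the authors describe, here is a minimal sketch of a fact-sheet record structured around the paper's five dimensions. The field contents and the LIME entry are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainabilityFactSheet:
    """One assessment record per explainability method, with free-text
    descriptors for each of the paper's five dimensions."""
    method: str
    functional: list[str] = field(default_factory=list)   # e.g. problem class, required model access
    operational: list[str] = field(default_factory=list)  # e.g. interaction style, output format
    usability: list[str] = field(default_factory=list)    # e.g. soundness, parsimony for end users
    safety: list[str] = field(default_factory=list)       # e.g. robustness, information leakage
    validation: list[str] = field(default_factory=list)   # e.g. user studies, synthetic benchmarks

# Hypothetical, partially filled sheet for a surrogate-based explainer.
lime_sheet = ExplainabilityFactSheet(
    method="LIME (local surrogate models)",
    functional=["model-agnostic", "needs query access to the model"],
    usability=["local fidelity only; explanations may disagree globally"],
    validation=["user studies reported in the original paper"],
)
print(lime_sheet.method, "->", lime_sheet.usability)
```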
The hidden assumptions behind counterfactual explanations and principal reasons
Solon Barocas, Andrew D. Selbst, Manish Raghavan
{"title":"The hidden assumptions behind counterfactual explanations and principal reasons","authors":"Solon Barocas, Andrew D. Selbst, Manish Raghavan","doi":"10.1145/3351095.3372830","DOIUrl":"https://doi.org/10.1145/3351095.3372830","url":null,"abstract":"Counterfactual explanations are gaining prominence within technical, legal, and business circles as a way to explain the decisions of a machine learning model. These explanations share a trait with the long-established \"principal reason\" explanations required by U.S. credit laws: they both explain a decision by highlighting a set of features deemed most relevant---and withholding others. These \"feature-highlighting explanations\" have several desirable properties: They place no constraints on model complexity, do not require model disclosure, detail what needed to be different to achieve a different decision, and seem to automate compliance with the law. But they are far more complex and subjective than they appear. In this paper, we demonstrate that the utility of feature-highlighting explanations relies on a number of easily overlooked assumptions: that the recommended change in feature values clearly maps to real-world actions, that features can be made commensurate by looking only at the distribution of the training data, that features are only relevant to the decision at hand, and that the underlying model is stable over time, monotonic, and limited to binary outcomes. We then explore several consequences of acknowledging and attempting to address these assumptions, including a paradox in the way that feature-highlighting explanations aim to respect autonomy, the unchecked power that feature-highlighting explanations grant decision makers, and a tension between making these explanations useful and the need to keep the model hidden. While new research suggests several ways that feature-highlighting explanations can work around some of the problems that we identify, the disconnect between features in the model and actions in the real world---and the subjective choices necessary to compensate for this---must be understood before these techniques can be usefully implemented.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116212016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 173
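To ground the discussion: a counterfactual explanation names a small change in feature values that would have flipped the decision. The brute-force sketch below finds such a change for a hypothetical two-feature credit model; real generators search far more efficiently, and, per the paper, even a correct counterfactual like this one silently assumes the suggested change maps to a feasible real-world action.

```python
import itertools

def model(income, debt):
    # Hypothetical stand-in for a lender's decision rule: True = approve.
    return income - 2 * debt > 50

def counterfactual(income, debt, step=5, max_steps=20):
    """Smallest grid perturbation (by L1 step count) that flips a denial
    into an approval. Brute force, for illustration only."""
    candidates = []
    for di, dd in itertools.product(range(-max_steps, max_steps + 1), repeat=2):
        if model(income + di * step, debt + dd * step):
            candidates.append((abs(di) + abs(dd), di * step, dd * step))
    if not candidates:
        return None
    _, d_income, d_debt = min(candidates)
    return {"income": income + d_income, "debt": debt + d_debt}

# A denied applicant: 60 - 2 * 20 = 20, which is not > 50.
print(counterfactual(60, 20))  # {'income': 60, 'debt': 0}: paying off debt flips the decision
```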
Roles for computing in social change
Rediet Abebe, Solon Barocas, J. Kleinberg, K. Levy, Manish Raghavan, D. G. Robinson
{"title":"Roles for computing in social change","authors":"Rediet Abebe, Solon Barocas, J. Kleinberg, K. Levy, Manish Raghavan, D. G. Robinson","doi":"10.1145/3351095.3372871","DOIUrl":"https://doi.org/10.1145/3351095.3372871","url":null,"abstract":"A recent normative turn in computer science has brought concerns about fairness, bias, and accountability to the core of the field. Yet recent scholarship has warned that much of this technical work treats problematic features of the status quo as fixed, and fails to address deeper patterns of injustice and inequality. While acknowledging these critiques, we posit that computational research has valuable roles to play in addressing social problems --- roles whose value can be recognized even from a perspective that aspires toward fundamental social change. In this paper, we articulate four such roles, through an analysis that considers the opportunities as well as the significant risks inherent in such work. Computing research can serve as a diagnostic, helping us to understand and measure social problems with precision and clarity. As a formalizer, computing shapes how social problems are explicitly defined --- changing how those problems, and possible responses to them, are understood. Computing serves as rebuttal when it illuminates the boundaries of what is possible through technical means. And computing acts as synecdoche when it makes long-standing social problems newly salient in the public eye. We offer these paths forward as modalities that leverage the particular strengths of computational work in the service of social change, without overclaiming computing's capacity to solve social problems on its own.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121090595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 163
Towards a critical race methodology in algorithmic fairness
A. Hanna, Emily L. Denton, A. Smart, Jamila Smith-Loud
{"title":"Towards a critical race methodology in algorithmic fairness","authors":"A. Hanna, Emily L. Denton, A. Smart, Jamila Smith-Loud","doi":"10.1145/3351095.3372826","DOIUrl":"https://doi.org/10.1145/3351095.3372826","url":null,"abstract":"We examine the way race and racial categories are adopted in algorithmic fairness frameworks. Current methodologies fail to adequately account for the socially constructed nature of race, instead adopting a conceptualization of race as a fixed attribute. Treating race as an attribute, rather than a structural, institutional, and relational phenomenon, can serve to minimize the structural aspects of algorithmic unfairness. In this work, we focus on the history of racial categories and turn to critical race theory and sociological work on race and ethnicity to ground conceptualizations of race for fairness research, drawing on lessons from public health, biomedical research, and social survey research. We argue that algorithmic fairness researchers need to take into account the multidimensionality of race, take seriously the processes of conceptualizing and operationalizing race, focus on social processes which produce racial inequality, and consider perspectives of those most affected by sociotechnical systems.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130881665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 219
Value-laden disciplinary shifts in machine learning
Ravit Dotan, S. Milli
{"title":"Value-laden disciplinary shifts in machine learning","authors":"Ravit Dotan, S. Milli","doi":"10.1145/3351095.3373157","DOIUrl":"https://doi.org/10.1145/3351095.3373157","url":null,"abstract":"As machine learning models are increasingly used for high-stakes decision making, scholars have sought to intervene to ensure that such models do not encode undesirable social and political values. However, little attention thus far has been given to how values influence the machine learning discipline as a whole. How do values influence what the discipline focuses on and the way it develops? If undesirable values are at play at the level of the discipline, then intervening on particular models will not suffice to address the problem. Instead, interventions at the disciplinary-level are required. This paper analyzes the discipline of machine learning through the lens of philosophy of science. We develop a conceptual framework to evaluate the process through which types of machine learning models (e.g. neural networks, support vector machines, graphical models) become predominant. The rise and fall of model-types is often framed as objective progress. However, such disciplinary shifts are more nuanced. First, we argue that the rise of a model-type is self-reinforcing-it influences the way model-types are evaluated. For example, the rise of deep learning was entangled with a greater focus on evaluations in compute-rich and data-rich environments. Second, the way model-types are evaluated encodes loaded social and political values. For example, a greater focus on evaluations in compute-rich and data-rich environments encodes values about centralization of power, privacy, and environmental concerns.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132507840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy
Elettra Bietti
{"title":"From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy","authors":"Elettra Bietti","doi":"10.1145/3351095.3372860","DOIUrl":"https://doi.org/10.1145/3351095.3372860","url":null,"abstract":"The word 'ethics' is under siege in technology policy circles. Weaponized in support of deregulation, self-regulation or handsoff governance, \"ethics\" is increasingly identified with technology companies' self-regulatory efforts and with shallow appearances of ethical behavior. So-called \"ethics washing\" by tech companies is on the rise, prompting criticism and scrutiny from scholars and the tech community at large. In parallel to the growth of ethics washing, its condemnation has led to a tendency to engage in \"ethics bashing.\" This consists in the trivialization of ethics and moral philosophy now understood as discrete tools or pre-formed social structures such as ethics boards, self-governance schemes or stakeholder groups. The misunderstandings underlying ethics bashing are at least threefold: (a) philosophy and \"ethics\" are seen as a communications strategy and as a form of instrumentalized cover-up or façade for unethical behavior, (b) philosophy is understood in opposition and as alternative to political representation and social organizing and (c) the role and importance of moral philosophy is downplayed and portrayed as mere \"ivory tower\" intellectualization of complex problems that need to be dealt with in practice. This paper argues that the rhetoric of ethics and morality should not be reductively instrumentalized, either by the industry in the form of \"ethics washing,\" or by scholars and policy-makers in the form of \"ethics bashing.\" Grappling with the role of philosophy and ethics requires moving beyond both tendencies and seeing ethics as a mode of inquiry that facilitates the evaluation of competing tech policy strategies. In other words, we must resist narrow reductivism of moral philosophy as instrumentalized performance and renew our faith in its intrinsic moral value as a mode of knowledgeseeking and inquiry. Far from mandating a self-regulatory scheme or a given governance structure, moral philosophy in fact facilitates the questioning and reconsideration of any given practice, situating it within a complex web of legal, political and economic institutions. Moral philosophy indeed can shed new light on human practices by adding needed perspective, explaining the relationship between technology and other worthy goals, situating technology within the human, the social, the political. It has become urgent to start considering technology ethics also from within and not only from outside of ethics.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132145092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 136
The relationship between trust in AI and trustworthy machine learning technologies
Ehsan Toreini, M. Aitken, Kovila P. L. Coopamootoo, Karen Elliott, Carlos Vladimiro Gonzalez Zelaya, A. Moorsel
{"title":"The relationship between trust in AI and trustworthy machine learning technologies","authors":"Ehsan Toreini, M. Aitken, Kovila P. L. Coopamootoo, Karen Elliott, Carlos Vladimiro Gonzalez Zelaya, A. Moorsel","doi":"10.1145/3351095.3372834","DOIUrl":"https://doi.org/10.1145/3351095.3372834","url":null,"abstract":"To design and develop AI-based systems that users and the larger public can justifiably trust, one needs to understand how machine learning technologies impact trust. To guide the design and implementation of trusted AI-based systems, this paper provides a systematic approach to relate considerations about trust from the social sciences to trustworthiness technologies proposed for AI-based services and products. We start from the ABI+ (Ability, Benevolence, Integrity, Predictability) framework augmented with a recently proposed mapping of ABI+ on qualities of technologies that support trust. We consider four categories of trustworthiness technologies for machine learning, namely these for Fairness, Explainability, Auditability and Safety (FEAS) and discuss if and how these support the required qualities. Moreover, trust can be impacted throughout the life cycle of AI-based systems, and we therefore introduce the concept of Chain of Trust to discuss trustworthiness technologies in all stages of the life cycle. In so doing we establish the ways in which machine learning technologies support trusted AI-based systems. Finally, FEAS has obvious relations with known frameworks and therefore we relate FEAS to a variety of international 'principled AI' policy and technology frameworks that have emerged in recent years.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126560125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 169
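A minimal sketch of the kind of mapping the paper builds, from ABI+ trust qualities to FEAS technology categories, expressed as a lookup table. The specific pairings are illustrative assumptions, not the paper's exact mapping.

```python
# Hypothetical mapping from ABI+ trust qualities to FEAS
# trustworthiness-technology categories; pairings are illustrative.
ABI_TO_FEAS = {
    "Ability": ["Safety", "Explainability"],
    "Benevolence": ["Fairness"],
    "Integrity": ["Auditability", "Fairness"],
    "Predictability": ["Safety", "Auditability"],
}

def technologies_supporting(quality):
    """FEAS categories that, under this sketch, support one ABI+ quality."""
    return ABI_TO_FEAS.get(quality, [])

for quality in ABI_TO_FEAS:
    print(f"{quality}: {', '.join(technologies_supporting(quality))}")
```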
"The human body is a black box": supporting clinical decision-making with deep learning “人体是黑盒子”:用深度学习支持临床决策
M. Sendak, M. C. Elish, M. Gao, Joseph D. Futoma, W. Ratliff, M. Nichols, A. Bedoya, S. Balu, Cara O'Brien
{"title":"\"The human body is a black box\": supporting clinical decision-making with deep learning","authors":"M. Sendak, M. C. Elish, M. Gao, Joseph D. Futoma, W. Ratliff, M. Nichols, A. Bedoya, S. Balu, Cara O'Brien","doi":"10.1145/3351095.3372827","DOIUrl":"https://doi.org/10.1145/3351095.3372827","url":null,"abstract":"Machine learning technologies are increasingly developed for use in healthcare. While research communities have focused on creating state-of-the-art models, there has been less focus on real world implementation and the associated challenges to fairness, transparency, and accountability that come from actual, situated use. Serious questions remain underexamined regarding how to ethically build models, interpret and explain model output, recognize and account for biases, and minimize disruptions to professional expertise and work cultures. We address this gap in the literature and provide a detailed case study covering the development, implementation, and evaluation of Sepsis Watch, a machine learning-driven tool that assists hospital clinicians in the early diagnosis and treatment of sepsis. Sepsis is a severe infection that can lead to organ failure or death if not treated in time and is the leading cause of inpatient deaths in US hospitals. We, the team that developed and evaluated the tool, discuss our conceptualization of the tool not as a model deployed in the world but instead as a socio-technical system requiring integration into existing social and professional contexts. Rather than focusing solely on model interpretability to ensure fair and accountable machine learning, we point toward four key values and practices that should be considered when developing machine learning to support clinical decision-making: rigorously define the problem in context, build relationships with stakeholders, respect professional discretion, and create ongoing feedback loops with stakeholders. Our work has significant implications for future research regarding mechanisms of institutional accountability and considerations for responsibly designing machine learning systems. Our work underscores the limits of model interpretability as a solution to ensure transparency, accuracy, and accountability in practice. Instead, our work demonstrates other means and goals to achieve FATML values in design and in practice.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123464008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 121
The disparate equilibria of algorithmic decision making when individuals invest rationally
Lydia T. Liu, Ashia C. Wilson, Nika Haghtalab, A. Kalai, C. Borgs, J. Chayes
{"title":"The disparate equilibria of algorithmic decision making when individuals invest rationally","authors":"Lydia T. Liu, Ashia C. Wilson, Nika Haghtalab, A. Kalai, C. Borgs, J. Chayes","doi":"10.1145/3351095.3372861","DOIUrl":"https://doi.org/10.1145/3351095.3372861","url":null,"abstract":"The long-term impact of algorithmic decision making is shaped by the dynamics between the deployed decision rule and individuals' response. Focusing on settings where each individual desires a positive classification---including many important applications such as hiring and school admissions, we study a dynamic learning setting where individuals invest in a positive outcome based on their group's expected gain and the decision rule is updated to maximize institutional benefit. By characterizing the equilibria of these dynamics, we show that natural challenges to desirable long-term outcomes arise due to heterogeneity across groups and the lack of realizability. We consider two interventions, decoupling the decision rule by group and subsidizing the cost of investment. We show that decoupling achieves optimal outcomes in the realizable case but has discrepant effects that may depend on the initial conditions otherwise. In contrast, subsidizing the cost of investment is shown to create better equilibria for the disadvantaged group even in the absence of realizability.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"7 21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126162911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 55
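The dynamics studied here lend themselves to a toy simulation: each group's qualification rate moves with its expected gain from investing under the current acceptance threshold, and the institution then re-fits its threshold to the pool. Everything below (update rules, parameters, initial conditions) is a simplified assumption for illustration, not the paper's model; it only shows how a shared decision rule can settle into a disparate equilibrium while decoupling or a cost subsidy pulls the disadvantaged group up.

```python
def simulate(shared_threshold=True, subsidy=0.0, rounds=200):
    """Toy dynamics: each group's qualification rate rises or falls with
    its expected gain from investing; the institution then re-fits its
    acceptance threshold(s) to the pool. Simplified illustration only."""
    qual = {"A": 0.6, "B": 0.3}    # initial qualification rates per group
    thresh = {"A": 0.5, "B": 0.5}  # acceptance thresholds per group
    for _ in range(rounds):
        for g in qual:
            accept_rate = min(1.0, qual[g] / max(thresh[g], 1e-9))
            gain = accept_rate - (0.5 - subsidy)  # expected benefit minus investment cost
            qual[g] = min(1.0, max(0.0, qual[g] + 0.05 * gain))
        if shared_threshold:
            t = (qual["A"] + qual["B"]) / 2  # one coupled rule for everyone
            thresh = {"A": t, "B": t}
        else:
            thresh = dict(qual)  # decoupled: one rule per group
    return {g: round(q, 2) for g, q in qual.items()}

print("coupled:   ", simulate(shared_threshold=True))               # B stuck near 0.33
print("decoupled: ", simulate(shared_threshold=False))              # both groups reach 1.0
print("subsidized:", simulate(shared_threshold=True, subsidy=0.2))  # both groups reach 1.0
```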